Self-supervised contrastive learning (CL) based pretraining allows the development of robust and generalizable deep learning models with small, labeled datasets, relieving the burden of label generation. This paper aims to evaluate the effect of CL-based pretraining on the performance of referable vs non-referable diabetic retinopathy (DR) classification. We have developed a CL-based framework with neural style transfer (NST) augmentation to produce models with better representations and initializations for detecting DR in color fundus images. We compare the performance of the CL-pretrained model against two state-of-the-art baseline models pretrained with ImageNet weights. We further investigate model performance with reduced labeled training data (down to 10%) to test the robustness of training models with small labeled datasets. The models were trained and validated on the EyePACS dataset and tested independently on clinical data from the University of Illinois at Chicago (UIC). Compared to the baseline models, our CL-pretrained FundusNet model had higher AUC (CI) values (0.91 (0.898 to 0.930) vs 0.80 (0.783 to 0.820) and 0.83 (0.801 to 0.853) on the UIC data). At 10% labeled training data, the FundusNet AUC was 0.81 (0.78 to 0.84) vs 0.58 (0.56 to 0.64) and 0.63 (0.60 to 0.66) in the baseline models when tested on the UIC dataset. CL-based pretraining with NST significantly improves DL classification performance, helps the model generalize well (transferable from EyePACS to UIC data), and allows training with small annotated datasets, thereby reducing the ground-truth annotation burden on clinicians.
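For concreteness, the contrastive pretraining step above might look like the following minimal sketch. The abstract does not specify the CL objective, so a SimCLR-style NT-Xent loss is assumed; `style_transfer`, `augment`, and `encoder` are hypothetical placeholders for the NST augmentation, a standard augmentation pipeline, and the FundusNet backbone.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss over two augmented views of the same batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2N, d)
    sim = z @ z.t() / temperature                             # cosine similarities
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool, device=z.device),
                     float("-inf"))                           # drop self-pairs
    # positives: view i matches view i + n, and vice versa
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# One hypothetical pretraining step: one NST-stylized view, one standard view.
# loss = nt_xent_loss(encoder(style_transfer(batch)), encoder(augment(batch)))
```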
Denoising Diffusion Probabilistic Models (DDPMs) are emerging in text-to-speech (TTS) synthesis because of their strong capability of generating high-fidelity samples. However, their iterative refinement process in high-dimensional data space results in slow inference speed, which restricts their application in real-time systems. Previous works have explored speeding up inference by minimizing the number of steps, but at the cost of sample quality. In this work, to improve the inference speed of DDPM-based TTS models while achieving high sample quality, we propose ResGrad, a lightweight diffusion model which learns to refine the output spectrogram of an existing TTS model (e.g., FastSpeech 2) by predicting the residual between the model output and the corresponding ground-truth speech. ResGrad has several advantages: 1) Compared with other acceleration methods for DDPMs, which need to synthesize speech from scratch, ResGrad reduces the complexity of the task by changing the generation target from the ground-truth mel-spectrogram to the residual, resulting in a more lightweight model and thus a smaller real-time factor. 2) ResGrad is employed in the inference process of the existing TTS model in a plug-and-play way, without re-training this model. We verify ResGrad on the single-speaker dataset LJSpeech and on two more challenging datasets with multiple speakers (LibriTTS) and a high sampling rate (VCTK). Experimental results show that, in comparison with other speed-up methods for DDPMs: 1) ResGrad achieves better sample quality at the same inference speed measured by real-time factor; 2) at similar speech quality, ResGrad synthesizes speech more than 10 times faster than baseline methods. Audio samples are available at https://resgrad1.github.io/.
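A schematic of the plug-and-play inference path might look as follows; `denoise_step` and the short four-step schedule are illustrative assumptions, not the paper's actual interface.

```python
import torch

def resgrad_infer(tts_model, resgrad, text, num_steps=4):
    """Plug-and-play refinement: the frozen TTS model produces a coarse
    mel-spectrogram, and a lightweight diffusion model generates only the
    residual to the ground truth, which is added back at the end."""
    mel_coarse = tts_model(text)                 # e.g. FastSpeech 2, not re-trained
    residual = torch.randn_like(mel_coarse)      # start the residual from noise
    for t in reversed(range(num_steps)):         # short reverse diffusion
        residual = resgrad.denoise_step(residual, t, cond=mel_coarse)  # hypothetical API
    return mel_coarse + residual                 # refined spectrogram
```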
In this work, we investigate improving the generalizability of GAN-generated image detectors by performing data augmentation in the fingerprint domain. Specifically, we first separate the fingerprints and contents of the GAN-generated images using an autoencoder based GAN fingerprint extractor, followed by random perturbations of the fingerprints. Then the original fingerprints are substituted with the perturbed fingerprints and added to the original contents, to produce images that are visually invariant but with distinct fingerprints. The perturbed images can successfully imitate images generated by different GANs to improve the generalization of the detectors, which is demonstrated by the spectra visualization. To our knowledge, we are the first to conduct data augmentation in the fingerprint domain. Our work explores a novel prospect that is distinct from previous works on spatial and frequency domain augmentation. Extensive cross-GAN experiments demonstrate the effectiveness of our method compared to the state-of-the-art methods in detecting fake images generated by unknown GANs.
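The augmentation pipeline can be sketched as below, assuming the autoencoder reconstruction approximates the fingerprint-free content and using a simple Gaussian perturbation of the fingerprint (the paper's actual perturbation family may differ).

```python
import torch

def fingerprint_augment(images, autoencoder, sigma=0.1):
    """Treat the autoencoder reconstruction as fingerprint-free 'content' and
    the residual as the GAN fingerprint; perturb the fingerprint and re-attach
    it, yielding a visually similar image with a distinct fingerprint."""
    with torch.no_grad():
        content = autoencoder(images)                       # content estimate
    fingerprint = images - content                          # extracted fingerprint
    perturbed = fingerprint + sigma * torch.randn_like(fingerprint)  # assumed Gaussian
    return content + perturbed
```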
Error correction in automatic speech recognition (ASR) aims to correct incorrect words in sentences generated by ASR models. Since recent ASR models usually have a low word error rate (WER), error correction models should modify only incorrect words to avoid affecting originally correct tokens, and therefore detecting incorrect words is important for error correction. Previous works on error correction either implicitly detect error words through target-source attention or a CTC (connectionist temporal classification) loss, or explicitly locate specific deletion/substitution/insertion errors. However, implicit error detection does not provide a clear signal about which tokens are incorrect, and explicit error detection suffers from low detection accuracy. In this paper, we propose SoftCorrect with a soft error detection mechanism to avoid the limitations of both explicit and implicit error detection. Specifically, we first detect whether a token is correct or not through a probability produced by a dedicated language model, and then design a constrained CTC loss that duplicates only the detected incorrect tokens, letting the decoder focus on correcting error tokens. Compared with implicit error detection via CTC loss, SoftCorrect provides an explicit signal about which words are incorrect and thus does not need to duplicate every token, only the incorrect ones; compared with explicit error detection, SoftCorrect does not pinpoint specific deletion/substitution/insertion errors but simply leaves them to the CTC loss. Experiments on the AISHELL-1 and Aidatatang datasets show that SoftCorrect achieves 26.1% and 9.4% CER reduction respectively, outperforming previous works by a large margin, while still enjoying the fast speed of parallel generation.
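The duplication step driven by soft detection can be illustrated with a toy sketch; the 0.5 threshold and duplication factor here are illustrative, not the paper's settings.

```python
def duplicate_incorrect_tokens(tokens, error_probs, threshold=0.5, dup=2):
    """Build the decoder input for the constrained CTC loss: tokens the
    detection LM judges incorrect are duplicated so the decoder has room
    to re-emit corrections; correct tokens pass through once."""
    out = []
    for tok, p in zip(tokens, error_probs):
        out.extend([tok] * (dup if p > threshold else 1))
    return out

# "B" is flagged as incorrect -> ['A', 'B', 'B', 'C']
print(duplicate_incorrect_tokens(["A", "B", "C"], [0.1, 0.9, 0.2]))
```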
We shed light on a pitfall and an opportunity in physics-informed neural networks (PINNs). We prove that a multilayer perceptron (MLP) with only ReLU (Rectified Linear Unit) or ReLU-like Lipschitz activation functions always has an identically vanishing Hessian. Such a network-imposed constraint contradicts any second- or higher-order partial differential equation (PDE). Therefore, a ReLU-based MLP cannot form a permissible function space for approximating their solutions. Inspired by this pitfall, we prove that a linear PDE up to the $n$-th order can be strictly satisfied by an MLP with $C^n$ activation functions when the weights of its output layer lie on a certain hyperplane, which we call the out-layer-hyperplane. An MLP equipped with the out-layer-hyperplane becomes "physics-enforced", no longer requiring a loss term for the PDE itself (but only those for the initial and boundary conditions). Such a hyperplane exists not only for MLPs but for any network architecture that ends with a fully-connected hidden layer. To our knowledge, this is the first PINN architecture that enforces point-wise correctness of a PDE. We give the closed-form expression of the out-layer-hyperplane for second-order linear PDEs and provide an implementation.
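The pitfall is easy to verify numerically: a ReLU MLP is piecewise linear in its input, so autograd returns a second derivative that is zero almost everywhere, as in this small illustrative check.

```python
import torch

# A ReLU MLP is piecewise linear, so its Hessian w.r.t. the input vanishes.
torch.manual_seed(0)
mlp = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.ReLU(),
    torch.nn.Linear(16, 16), torch.nn.ReLU(),
    torch.nn.Linear(16, 1),
)
x = torch.linspace(-2.0, 2.0, 9).unsqueeze(1).requires_grad_(True)
u = mlp(x)
u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]   # first derivative
u_xx = torch.autograd.grad(u_x.sum(), x)[0]                   # second derivative
print(u_xx.abs().max())  # tensor(0.): no second-order PDE term can be represented
```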
Graph neural networks (GNNs) have been widely used in many domains where data are represented as graphs, including social networks, recommender systems, biology, chemistry, and more. Recently, the expressive power of GNNs has drawn much interest. It has been shown that, despite the promising empirical results GNNs have achieved in many applications, there are limitations that hinder their performance on certain tasks. For example, since GNNs update node features mainly based on local information, they have limited expressive power in capturing long-range dependencies among nodes in a graph. To address some of these limitations, several recent works have begun to explore augmenting GNNs with memory to improve their expressive power on relevant tasks. In this paper, we provide a comprehensive review of the existing literature on memory-augmented GNNs. We review these works through the lens of psychology and neuroscience, which have established multiple memory systems and mechanisms in biological brains. We propose a taxonomy of memory-augmented GNN works and a set of criteria for comparing their memory mechanisms. We also provide critical discussion of the limitations of these works. Finally, we discuss the challenges and future directions of this area.
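The surveyed architectures vary widely, but the common pattern can be conveyed with a generic sketch under our own assumptions: a standard local message-passing update augmented with an attention read from an external slot memory, giving nodes access to non-local information.

```python
import torch

class MemoryAugmentedGNNLayer(torch.nn.Module):
    """Local message passing plus a read from an external slot memory via
    attention; not any specific surveyed model, just the generic pattern."""
    def __init__(self, dim, num_slots=8):
        super().__init__()
        self.memory = torch.nn.Parameter(torch.randn(num_slots, dim))
        self.update = torch.nn.Linear(2 * dim, dim)

    def forward(self, h, adj):                 # h: (N, dim), adj: (N, N)
        local = adj @ h                        # neighbor aggregation (local info)
        attn = torch.softmax(h @ self.memory.t(), dim=-1)  # attention over slots
        mem_read = attn @ self.memory          # global memory read
        return torch.relu(self.update(torch.cat([local, mem_read], dim=-1)))
```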
Activation functions are element-wise mathematical functions that play a crucial role in deep neural networks (DNNs). Many novel and sophisticated activation functions have been proposed to improve DNN accuracy, but they can also consume substantial memory during training with back-propagation. In this study, we propose nested forward automatic differentiation (forward-AD), targeted specifically at element-wise activation functions, for memory-efficient DNN training. We deploy nested forward-AD in two widely used deep learning frameworks, TensorFlow and PyTorch, which support static and dynamic computation graphs, respectively. Our evaluation shows that nested forward-AD reduces the memory footprint by up to 1.97x, outperforming the baseline by 20% under the same memory reduction ratio.
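A minimal sketch of the idea for a single composite activation (SiLU, chosen here as an example) is shown below; the actual system instruments the frameworks' AD machinery, whereas this toy version simply computes the element-wise derivative during the forward pass, so none of the intermediate tensors of the composition need to be saved for backward.

```python
import torch

class ForwardADSiLU(torch.autograd.Function):
    """Element-wise SiLU with its derivative computed during the forward
    pass (dual-number style). Only the single derivative tensor is saved,
    instead of every intermediate of the composite activation."""
    @staticmethod
    def forward(ctx, x):
        s = torch.sigmoid(x)
        y = x * s                        # primal: silu(x) = x * sigmoid(x)
        dy = s + x * s * (1 - s)         # tangent: d/dx [x * sigmoid(x)]
        ctx.save_for_backward(dy)        # one tensor kept for backward
        return y

    @staticmethod
    def backward(ctx, grad_out):
        (dy,) = ctx.saved_tensors
        return grad_out * dy             # element-wise chain rule

# usage: y = ForwardADSiLU.apply(x)
```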
Distributed privacy-preserving regression schemes have been developed and extended across various fields, where multiple parties collaboratively and privately run optimization algorithms, e.g., gradient descent, to learn a set of optimal parameters. However, traditional gradient-based methods fail to solve problems whose objective functions contain L1 regularization, such as Lasso regression. In this paper, we present Federated Coordinate Descent, a new distributed scheme called FCD, to address this problem securely under multiparty scenarios. Specifically, through secure aggregation and added perturbations, our scheme guarantees that: (1) no local information is leaked to other parties, and (2) the global model parameters are not exposed to the cloud server. In the end, the parties can eliminate the added perturbations to derive a global model with high performance. We show that the FCD scheme fills the gap of multiparty secure coordinate descent methods and applies to general linear regressions, including linear, ridge, and Lasso regression. Theoretical security analysis and experimental results demonstrate that FCD can be executed effectively and efficiently, providing MAE measures as low as centralized methods on tasks of the three types of linear regression over real-world UCI datasets.
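One coordinate update under zero-sum masking might be sketched as follows, assuming horizontally partitioned data for simplicity; the paper's actual secure-aggregation protocol and the exact quantities shared are not specified in the abstract.

```python
import numpy as np

def soft_threshold(rho, lam):
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def fcd_coordinate_update(parties, w, j, lam, rng):
    """One Lasso coordinate update: each party masks its local statistic
    with a perturbation; the masks sum to zero, so aggregation recovers
    the exact global statistic without exposing any single contribution."""
    masks = rng.normal(size=len(parties))
    masks -= masks.mean()                        # zero-sum masks
    rho_shares, z = [], 0.0
    for mask, (X, y) in zip(masks, parties):
        r = y - X @ w + X[:, j] * w[j]           # partial residual (stays local)
        rho_shares.append(X[:, j] @ r + mask)    # only the masked value is shared
        z += X[:, j] @ X[:, j]
    rho = sum(rho_shares)                        # masks cancel in the sum
    w[j] = soft_threshold(rho, lam) / z          # Lasso soft-thresholding step
    return w
```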
Quantization is a technique for reducing the computation and memory cost of DNN models, which keep growing larger. Existing quantization solutions use fixed-point integer or floating-point types, whose benefits are limited because both require more bits to maintain the accuracy of the original model. On the other hand, variable-length quantization uses low-bit quantization for normal values and high precision for a fraction of outlier values. Although this line of work brings algorithmic benefits, it also introduces significant hardware overhead due to variable-length encoding and decoding. In this work, we propose a fixed-length adaptive numerical data type called ANT to achieve low-bit quantization with tiny hardware overhead. Our data type ANT leverages two key innovations to exploit the intra-tensor and inter-tensor adaptive opportunities in DNN models. First, we propose a particular data type, flint, that combines the advantages of float and int to adapt to the importance of different values within a tensor. Second, we propose an adaptive framework that selects the best type for each tensor according to its distribution characteristics. We design a unified processing-element architecture for ANT and show that it integrates easily with existing DNN accelerators. Our design achieves a 2.8$\times$ speedup and 2.5$\times$ better energy efficiency over state-of-the-art quantization accelerators.
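The per-tensor adaptation can be sketched as below; the kurtosis test and the log-domain quantization are illustrative stand-ins for the paper's distribution analysis and the actual flint encoding, which combines exponent and integer bits in hardware.

```python
import numpy as np

def quantize_uniform(t, bits):
    """Symmetric uniform (int-like) quantization."""
    scale = np.abs(t).max() / (2 ** (bits - 1) - 1) + 1e-12
    return np.round(t / scale) * scale

def select_type_and_quantize(tensor, bits=4):
    """Pick an int-like type for light-tailed tensors and a float/flint-like
    (log-domain) type for heavy-tailed ones, based on a distribution test."""
    x = tensor.ravel()
    kurtosis = np.mean((x - x.mean()) ** 4) / (x.std() ** 4 + 1e-12)
    if kurtosis < 3.0:                           # light tails: int-like suffices
        return quantize_uniform(tensor, bits), "int"
    sign, mag = np.sign(tensor), np.abs(tensor) + 1e-12
    q = quantize_uniform(np.log2(mag), bits)     # finer near zero, coarser far out
    return sign * 2.0 ** q, "flint-like"
```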
Post-training quantization (PTQ) has attracted increasing attention due to its convenience in deploying quantized neural networks. Rounding, the primary source of quantization error, is optimized only for model weights, while activations still use the round-to-nearest operation. In this work, we demonstrate for the first time that well-chosen activation rounding schemes can improve final accuracy. To deal with the challenge of the dynamicity of the activation rounding scheme, we adaptively adjust the rounding border through a simple function to generate rounding schemes at the inference stage. The border function covers the impact of weight errors, activation errors, and propagated errors, eliminating the bias of the element-wise error and thereby further benefiting model accuracy. We also make the border aware of global errors to better fit different arriving activations. Finally, we propose the AQuant framework to learn the border function. Extensive experiments show that AQuant achieves noticeable improvements over state-of-the-art works with negligible overhead and pushes the accuracy of ResNet-18 up to 60.3% under 2-bit weight and activation post-training quantization.
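The effect of an adjustable rounding border can be sketched in a few lines; here the border is a scalar for illustration, whereas AQuant learns it as a function of weight, activation, and propagated errors.

```python
import torch

def round_with_border(x, scale, border):
    """Quantize with an adjustable rounding border within each bin:
    border=0.5 recovers standard round-to-nearest."""
    t = x / scale
    return (torch.floor(t) + (torch.frac(t) >= border).float()) * scale

# round-to-nearest vs a shifted border on the same activations
x = torch.tensor([0.4, 0.6, 1.4, 1.6])
print(round_with_border(x, scale=1.0, border=0.5))  # tensor([0., 1., 1., 2.])
print(round_with_border(x, scale=1.0, border=0.7))  # tensor([0., 0., 1., 1.])
```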